Aurora
Towards Next-Generation Medical Agent: How o1 is Reshaping Decision-Making in Medical Scenarios
Xu, Shaochen, Zhou, Yifan, Liu, Zhengliang, Wu, Zihao, Zhong, Tianyang, Zhao, Huaqin, Li, Yiwei, Jiang, Hanqi, Pan, Yi, Chen, Junhao, Lu, Jin, Zhang, Wei, Zhang, Tuo, Zhang, Lu, Zhu, Dajiang, Li, Xiang, Liu, Wei, Li, Quanzheng, Sikora, Andrea, Zhai, Xiaoming, Xiang, Zhen, Liu, Tianming
Artificial Intelligence (AI) has become essential in modern healthcare, with large language models (LLMs) offering promising advances in clinical decision-making. Traditional model-based approaches, including those leveraging in-context demonstrations and those with specialized medical fine-tuning, have demonstrated strong performance in medical language processing but struggle with real-time adaptability, multi-step reasoning, and handling complex medical tasks. Agent-based AI systems address these limitations by incorporating reasoning traces, tool selection based on context, knowledge retrieval, and both short- and long-term memory. These additional features enable the medical AI agent to handle complex medical scenarios where decision-making should be built on real-time interaction with the environment. Therefore, unlike conventional model-based approaches that treat medical queries as isolated questions, medical AI agents approach them as complex tasks and behave more like human doctors. In this paper, we study the choice of the backbone LLM for medical AI agents, which is the foundation for the agent's overall reasoning and action generation. In particular, we consider the emergent o1 model and examine its impact on agents' reasoning, tool-use adaptability, and real-time information retrieval across diverse clinical scenarios, including high-stakes settings such as intensive care units (ICUs). Our findings demonstrate o1's ability to enhance diagnostic accuracy and consistency, paving the way for smarter, more responsive AI tools that support better patient outcomes and decision-making efficacy in clinical practice.
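The agent loop described in this abstract (reasoning traces, context-driven tool selection, retrieval, and memory) can be sketched minimally as follows. This is an illustrative toy, not the paper's system: the `backbone` stub stands in for a real LLM such as o1, and the `lab_lookup` tool is a hypothetical example of a retrieval tool.

```python
# Minimal sketch of a medical-agent loop: the backbone LLM emits a reasoning
# trace plus an action, the agent executes the chosen tool, and observations
# accumulate in short-term memory. All names here are illustrative.

def backbone(prompt: str) -> str:
    """Stub backbone LLM (e.g. o1). Replace with a real model call."""
    if "lab_lookup" not in prompt:
        return "THOUGHT: need recent labs. ACTION: lab_lookup"
    return "THOUGHT: labs reviewed, risk assessed. ACTION: finish"

TOOLS = {
    "lab_lookup": lambda: "lactate 4.1 mmol/L",  # mock retrieval tool
}

def run_agent(query: str, max_steps: int = 5) -> list[str]:
    memory: list[str] = [f"QUERY: {query}"]  # short-term memory of the episode
    for _ in range(max_steps):
        reply = backbone("\n".join(memory))  # reasoning trace + chosen action
        memory.append(reply)
        action = reply.rsplit("ACTION:", 1)[1].strip()
        if action == "finish":
            break
        # Execute the selected tool and feed the observation back to the LLM.
        memory.append(f"OBSERVATION ({action}): {TOOLS[action]()}")
    return memory

trace = run_agent("Assess ICU patient for sepsis risk")
```

Unlike a single-shot model query, each step conditions on the growing memory, which is what lets the agent react to intermediate observations.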
Structured prompt interrogation and recursive extraction of semantics (SPIRES): A method for populating knowledge bases using zero-shot learning
Caufield, J. Harry, Hegde, Harshad, Emonet, Vincent, Harris, Nomi L., Joachimiak, Marcin P., Matentzoglu, Nicolas, Kim, HyeongSik, Moxon, Sierra A. T., Reese, Justin T., Haendel, Melissa A., Robinson, Peter N., Mungall, Christopher J.
Creating knowledge bases and ontologies is a time-consuming task that relies on manual curation. AI/NLP approaches can assist expert curators in populating these knowledge bases, but current approaches rely on extensive training data and are not able to populate arbitrarily complex nested knowledge schemas. Here we present Structured Prompt Interrogation and Recursive Extraction of Semantics (SPIRES), a Knowledge Extraction approach that relies on the ability of Large Language Models (LLMs) to perform zero-shot learning (ZSL) and general-purpose query answering from flexible prompts, returning information that conforms to a specified schema. Given a detailed, user-defined knowledge schema and an input text, SPIRES recursively performs prompt interrogation against GPT-3+ to obtain a set of responses matching the provided schema. SPIRES uses existing ontologies and vocabularies to provide identifiers for all matched elements. We present examples of the use of SPIRES in different domains, including extraction of food recipes, multi-species cellular signaling pathways, disease treatments, multi-step drug mechanisms, and chemical-to-disease causation graphs. Current SPIRES accuracy is comparable to the mid-range of existing Relation Extraction (RE) methods, but it has the advantage of easy customization, flexibility, and, crucially, the ability to perform new tasks in the absence of any training data. This method supports a general strategy of leveraging the language-interpreting capabilities of LLMs to assemble knowledge bases, assisting manual knowledge curation and acquisition while supporting validation with publicly available databases and ontologies external to the LLM. SPIRES is available as part of the open-source OntoGPT package: https://github.com/monarch-initiative/ontogpt.
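The core SPIRES idea of recursive prompt interrogation against a schema can be sketched as below. This is a minimal illustration, not the OntoGPT implementation: `query_llm` is a canned stub standing in for a real GPT-3+ call, and the recipe schema is a toy example modeled on the paper's food-recipe use case.

```python
# Sketch of SPIRES-style extraction: one prompt per schema slot, recursing
# into nested sub-schemas, with multivalued answers split on a delimiter.

def query_llm(prompt: str) -> str:
    """Stub LLM: returns canned extraction answers. Replace with a real API call."""
    canned = {
        "label": "chicken soup",
        "ingredients": "chicken; water; salt",
    }
    for slot, answer in canned.items():
        if slot in prompt:
            return answer
    return ""

def extract(schema: dict, text: str) -> dict:
    """Recursively interrogate the LLM to fill every slot of the schema."""
    result = {}
    for field, spec in schema.items():
        if isinstance(spec, dict):
            # Nested schema: recurse so the sub-schema drives new prompts.
            result[field] = extract(spec, text)
        else:
            prompt = f"From the text below, extract the {field} ({spec}).\nText: {text}"
            answer = query_llm(prompt)
            # Multivalued slots come back delimiter-separated.
            result[field] = [a.strip() for a in answer.split(";")] if ";" in answer else answer
    return result

recipe_schema = {"label": "name of the dish", "ingredients": "list of ingredients"}
result = extract(recipe_schema, "A simple chicken soup made with chicken, water, and salt.")
```

A real implementation would additionally ground each extracted string to an ontology identifier, which is the step that distinguishes SPIRES from plain free-text extraction.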
Harmonization Across Imaging Locations (HAIL): One-Shot Learning for Brain MRI
Parida, Abhijeet, Jiang, Zhifan, Anwar, Syed Muhammad, Foreman, Nicholas, Stence, Nicholas, Fisher, Michael J., Packer, Roger J., Avery, Robert A., Linguraru, Marius George
For machine learning-based prognosis and diagnosis of rare diseases, such as pediatric brain tumors, it is necessary to gather medical imaging data from multiple clinical sites that may use different devices and protocols. Deep learning-driven harmonization of radiologic images relies on generative adversarial networks (GANs). However, GANs notoriously generate pseudo structures that do not exist in the original training data, a phenomenon known as "hallucination". To prevent hallucination in medical imaging, such as magnetic resonance images (MRI) of the brain, we propose a one-shot learning method where we utilize neural style transfer for harmonization. At test time, the method uses one image from a clinical site to generate an image that matches the intensity scale of the collaborating sites. Our approach combines learning a feature extractor, neural style transfer, and adaptive instance normalization. We further propose a novel strategy to evaluate the effectiveness of image harmonization approaches with evaluation metrics that both measure image style harmonization and assess the preservation of anatomical structures. Experimental results demonstrate the effectiveness of our method in preserving patient anatomy while adjusting the image intensities to a new clinical site. Our general harmonization model can be used on unseen data from new sites, making it a valuable tool for real-world medical applications and clinical trials.
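The adaptive instance normalization (AdaIN) step mentioned in this abstract is a standard operation that can be sketched as follows. This is a minimal NumPy illustration of AdaIN itself, not the authors' harmonization pipeline; the array shapes and random test data are assumptions for the example.

```python
import numpy as np

def adain(content: np.ndarray, style: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Adaptive instance normalization: rescale content features so their
    per-channel mean and std match those of the style features.
    Both arrays have shape (channels, height, width)."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return s_std * (content - c_mean) / (c_std + eps) + s_mean

# Toy feature maps: "content" from one site, "style" from a collaborating site.
rng = np.random.default_rng(0)
content = rng.normal(0.0, 1.0, (3, 8, 8))
style = rng.normal(5.0, 2.0, (3, 8, 8))
harmonized = adain(content, style)
```

Because only first- and second-order channel statistics are transferred, the spatial arrangement of the content features, and hence the patient anatomy they encode, is left intact, which is the property the harmonization setting relies on.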
Aurora successfully demonstrates AV fault management system
A reliable fault management system is essential for safely operating autonomous vehicle fleets for commercial customers, helping to pave the way to full commercialization. With this in mind, self-driving tech firm Aurora Innovation Inc recently delivered its Beta 3.0 product update and demonstrated its fault management system – specifically the Aurora Driver's ability to detect system issues and respond by safely pulling over to the side of the road without any human involvement. The company says it achieved its milestone ahead of schedule, following its implementation on Aurora-powered trucks operating on public roads at highway speeds. "Any of a number of factors, from blown tires to damaged sensors, can compromise a vehicle while on the road," stated Sterling Anderson, Aurora co-founder and chief product officer. "Safely detecting and responding to those issues is essential for a reliable self-driving product operating at scale. Our fault management system lays the groundwork for safe autonomous operations without vehicle operators, chase vehicles or remote human fallback systems."
Waymo CTO on the company's past, present and what comes next
A decade ago, a dozen or so engineers gathered at Google's main Mountain View campus on Charleston Road to work on Project Chauffeur, a secret endeavor housed under the tech giant's moonshot factory X. Project Chauffeur -- popularly known as the "Google self-driving car project" -- kicked off in January 2009. It would eventually graduate from its project status to become a standalone company called Waymo in 2016. The project, originally led by Sebastian Thrun, would help spark an entire ecosystem that is still developing today. Venture capitalists took notice and stampeded in, auto analysts shifted gears, and regulators, urban planners and policy wonks started collecting data and considering the impact of AVs on cities. The project would also become a springboard for a number of engineers who would go on to create their own companies. It's a list that includes Aurora co-founder Chris Urmson, Argo AI co-founder Bryan Salesky and Anthony Levandowski, who helped launch Otto and more recently Pronto.ai.